
    An international longitudinal registry of patients with atrial fibrillation at risk of stroke (GARFIELD): the UK protocol

    Background: Atrial fibrillation (AF) is an independent risk factor for stroke and a significant predictor of mortality. Evidence-based guidelines for stroke prevention in AF recommend antithrombotic therapy corresponding to the risk of stroke. In practice, many patients with AF do not receive the appropriate antithrombotic therapy and are left either unprotected or inadequately protected against stroke. The purpose of the Global Anticoagulant Registry in the FIELD (GARFIELD) is to determine the real-life management and outcomes of patients newly diagnosed with non-valvular AF. Methods/design: GARFIELD is an observational, international registry of newly diagnosed AF patients with at least one additional investigator-defined risk factor for stroke. The aim is to enrol 55,000 patients at more than 1,000 centres in 50 countries worldwide. Enrolment will take place in five independent, sequential, prospective cohorts; the first cohort also includes a retrospective validation cohort. Each cohort will be followed up for 2 years. The UK is expected to be a significant contributor to GARFIELD, aiming to enrol 4,582 patients and reflecting the care environment in which patients with AF are managed. The UK protocol will also focus on better understanding the validity of the two main stroke risk scores (CHADS2 and CHA2DS2-VASc) and the HAS-BLED bleeding risk score in the context of a diverse patient population. Discussion: The GARFIELD registry will describe how therapeutic strategies, patient care, and clinical outcomes evolve over time. This study will provide comprehensive UK-specific data that will allow a range of evaluations both at a national level and in relation to the global data, and will contribute to a better understanding of AF management in the UK.
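
    As an illustration of the stroke risk scoring referred to above, a minimal sketch of the CHADS2 score (one point each for congestive heart failure, hypertension, age 75 or over and diabetes, and two points for prior stroke or TIA) is given below; the function and field names are hypothetical and this is not part of the GARFIELD protocol itself.

```python
# Illustrative sketch only: the published CHADS2 stroke-risk score.
# Function and parameter names are hypothetical.
def chads2_score(chf: bool, hypertension: bool, age: int,
                 diabetes: bool, prior_stroke_or_tia: bool) -> int:
    """Return the CHADS2 score (0-6)."""
    score = 0
    score += 1 if chf else 0                  # Congestive heart failure
    score += 1 if hypertension else 0         # Hypertension
    score += 1 if age >= 75 else 0            # Age >= 75 years
    score += 1 if diabetes else 0             # Diabetes mellitus
    score += 2 if prior_stroke_or_tia else 0  # Prior stroke or TIA
    return score

# Example: a 78-year-old with hypertension and diabetes scores 3.
print(chads2_score(chf=False, hypertension=True, age=78,
                   diabetes=True, prior_stroke_or_tia=False))
```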

    Adjusting bone mass for differences in projected bone area and other confounding variables: an allometric perspective.

    The traditional method of assessing bone mineral density (BMD; given by bone mineral content [BMC] divided by projected bone area [Ap], BMD = BMC/Ap) has come under strong criticism from various authors, whose objection is that the projected bone "area" (Ap) will systematically underestimate the skeletal bone "volume" of taller subjects. To reduce the confounding effects of bone size, an alternative ratio has been proposed, called bone mineral apparent density [BMAD = BMC/(Ap)^(3/2)]. However, bone size is not the only confounding variable associated with BMC; others include age, sex, body size, and maturation. To assess the dimensional relationship between BMC and projected bone area, independent of other confounding variables, we proposed and fitted a proportional allometric model to the BMC data of the L2-L4 vertebrae from a previously published study. The projected bone area exponents were greater than unity for both boys (1.43) and girls (1.02), but only the boys' fitted exponent was not different from that predicted by geometric similarity (1.5). Based on these exponents, it is not clear whether bone mass acquisition increases in proportion to the projected bone area (Ap) or to an estimate of projected bone volume, (Ap)^(3/2). However, by adopting the proposed methods, the analysis will automatically adjust BMC for differences in projected bone size and other confounding variables for the particular population being studied. Hence, the need to speculate about the theoretical value of the exponent of Ap, although interesting, becomes redundant.
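
    A minimal sketch of how an allometric exponent of this kind can be estimated, by ordinary least squares on the log scale (BMC = a * Ap^b implies log BMC = log a + b * log Ap), is given below. The data are synthetic and the variable names hypothetical; the study's actual model also adjusted for covariates such as age, sex, body size and maturation.

```python
# Sketch: estimate an allometric exponent b in BMC = a * Ap**b by fitting a
# straight line on the log-log scale. Synthetic data, illustration only.
import numpy as np

rng = np.random.default_rng(0)
ap = rng.uniform(30.0, 60.0, size=200)                    # projected bone area (cm^2), synthetic
bmc = 0.05 * ap ** 1.5 * rng.lognormal(0.0, 0.08, 200)    # BMC (g) with multiplicative error

# Ordinary least squares on log(BMC) = log(a) + b * log(Ap)
b, log_a = np.polyfit(np.log(ap), np.log(bmc), deg=1)
print(f"fitted exponent b = {b:.2f} (geometric similarity predicts 1.5)")
```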

    Combining estimates of interest in prognostic modelling studies after multiple imputation: current practice and guidelines

    Background: Multiple imputation (MI) provides an effective approach to handling missing covariate data within prognostic modelling studies, as it can properly account for the missing data uncertainty. The multiply imputed datasets are each analysed using standard prognostic modelling techniques to obtain the estimates of interest. The estimates from each imputed dataset are then combined into one overall estimate and variance, incorporating both the within- and between-imputation variability. Rubin's rules for combining these multiply imputed estimates are based on asymptotic theory. The resulting combined estimates may be more accurate if the posterior distribution of the population parameter of interest is better approximated by the normal distribution. However, the normality assumption may not be appropriate for all the parameters of interest when analysing prognostic modelling studies, such as predicted survival probabilities and model performance measures. Methods: Guidelines for combining the estimates of interest when analysing prognostic modelling studies are provided. A literature review was performed to identify current practice for combining such estimates in prognostic modelling studies. Results: Methods for combining the reported estimates after MI were not well reported in the current literature. Rubin's rules without any transformation were the standard approach, when any method was stated at all. Conclusion: The proposed simple guidelines for combining estimates after MI may lead to a wider and more appropriate use of MI in future prognostic modelling studies.
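
    A minimal sketch of Rubin's rules as described above, pooling m imputation-specific estimates and their variances into one overall estimate and total variance, is given below. The array values are hypothetical, and transformations (e.g. log or logit for bounded quantities) would be applied before pooling where appropriate.

```python
# Sketch of Rubin's rules for pooling estimates from m imputed datasets.
# q[i] is the estimate from imputation i, u[i] its within-imputation variance.
import numpy as np

def rubins_rules(q: np.ndarray, u: np.ndarray):
    """Return the pooled estimate and its total variance."""
    m = len(q)
    q_bar = q.mean()                # pooled point estimate
    w = u.mean()                    # within-imputation variance
    b = q.var(ddof=1)               # between-imputation variance
    t = w + (1.0 + 1.0 / m) * b     # total variance
    return q_bar, t

# Example with hypothetical estimates from m = 5 imputations.
q = np.array([0.52, 0.48, 0.55, 0.50, 0.47])
u = np.array([0.010, 0.012, 0.011, 0.009, 0.010])
print(rubins_rules(q, u))
```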

    Comparison of techniques for handling missing covariate data within prognostic modelling studies: a simulation study

    Background: There is no consensus on the most appropriate approach to handling missing covariate data within prognostic modelling studies. A simulation study was therefore performed to assess the effects of different missing data techniques on the performance of a prognostic model. Methods: Datasets were generated to resemble the skewed distributions seen in a motivating breast cancer example. Multivariate missing data were imposed on four covariates using four different mechanisms: missing completely at random (MCAR), missing at random (MAR), missing not at random (MNAR) and a combination of all three mechanisms. Five proportions of incomplete cases, ranging from 5% to 75%, were considered. Complete case analysis (CC), single imputation (SI) and five multiple imputation (MI) techniques available within the R statistical software were investigated: a) a data augmentation (DA) approach assuming a multivariate normal distribution, b) DA assuming a general location model, c) regression switching imputation, d) regression switching with predictive mean matching (MICE-PMM) and e) flexible additive imputation models. A Cox proportional hazards model was fitted and appropriate estimates for the regression coefficients and model performance measures were obtained. Results: Performing a CC analysis produced unbiased regression estimates but inflated standard errors, which affected the significance of the covariates in the model with 25% or more missingness. Using SI underestimated the variability, resulting in poor coverage even with 10% missingness. Of the MI approaches, applying MICE-PMM produced, in general, the least biased estimates, better coverage for the incomplete covariates and better model performance for all mechanisms. However, this MI approach still produced biased regression coefficient estimates for the incomplete skewed continuous covariates when 50% or more cases had missing data imposed with a MCAR, MAR or combined mechanism. When the missingness depended on the incomplete covariates, i.e. MNAR, estimates were biased with more than 10% incomplete cases for all MI approaches. Conclusion: The results from this simulation study suggest that performing MICE-PMM may be the preferred MI approach provided that less than 50% of the cases have missing data and the missing data are not MNAR.
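
    A minimal sketch of how MCAR and MAR missingness of the kind described above can be imposed on a covariate in a simulated dataset is given below; the data and variable names are hypothetical, and the study itself used more elaborate imputation models within R.

```python
# Sketch: impose MCAR and MAR missingness on a synthetic covariate x2.
# Under MCAR the missingness is unrelated to the data; under MAR it depends
# on a fully observed covariate x1. Purely illustrative.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
x1 = rng.normal(size=n)                          # fully observed covariate
x2 = rng.lognormal(mean=0.0, sigma=1.0, size=n)  # skewed covariate to be made incomplete

# MCAR: each value of x2 is missing with the same probability (here 25%).
x2_mcar = x2.copy()
x2_mcar[rng.random(n) < 0.25] = np.nan

# MAR: the probability that x2 is missing depends on the observed x1.
p_miss = 1.0 / (1.0 + np.exp(-(x1 - 1.0)))       # logistic model in x1
x2_mar = x2.copy()
x2_mar[rng.random(n) < p_miss] = np.nan

print(np.isnan(x2_mcar).mean(), np.isnan(x2_mar).mean())
```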

    Standardisation of rates using logistic regression: a comparison with the direct method

    Background: Standardisation of rates in health services research is generally undertaken using the direct and indirect arithmetic methods. These methods can produce unreliable estimates when the calculations are based on small numbers. Regression-based methods are available but are rarely applied in practice. This study demonstrates the advantages of using logistic regression to obtain smoothed standardised estimates of the prevalence of a rare disease in the presence of covariates. Methods: Step-by-step worked examples of the logistic and direct methods are presented, utilising data from BETS, an observational study designed to estimate the prevalence of subclinical thyroid disease in the elderly. Rates calculated by the direct method were standardised by sex and age categories, whereas rates calculated by the logistic method were standardised by sex and age as a continuous variable. Results: The two methods produce estimates of similar magnitude when standardising by age and sex. The standard errors produced by the logistic method were lower than those produced by the conventional direct method. Conclusion: Regression-based standardisation is a practical alternative to the direct method. It produces more reliable estimates than the direct or indirect method when the calculations are based on small numbers. It has greater flexibility in factor selection and allows standardisation by both continuous and categorical variables. It therefore allows standardisation to be performed in situations where the direct method would give unreliable results.
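
    A minimal sketch contrasting the two approaches described above is given below: direct standardisation with age-band-specific rates, and logistic regression followed by averaging of the predicted probabilities over a standard population. The data are synthetic, the column names and standard-population weights hypothetical, and this is not the BETS analysis itself.

```python
# Sketch: direct standardisation versus logistic-regression standardisation
# of a disease prevalence, on synthetic data. Illustrative only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 2000
age = rng.integers(60, 90, size=n)                   # study population ages
p = 1.0 / (1.0 + np.exp(-(-6.0 + 0.05 * age)))       # true prevalence rises with age
disease = rng.random(n) < p
df = pd.DataFrame({"age": age, "disease": disease.astype(int)})

# Direct standardisation: weight age-band-specific rates by a standard population.
df["age_band"] = pd.cut(df["age"], bins=[59, 69, 79, 89])
band_rates = df.groupby("age_band", observed=True)["disease"].mean()
std_weights = pd.Series([0.5, 0.3, 0.2], index=band_rates.index)  # hypothetical standard population
print("direct:", float((band_rates * std_weights).sum()))

# Logistic-regression standardisation: model prevalence with age as continuous,
# then average predictions over the standard population's age distribution.
X = sm.add_constant(df["age"])
fit = sm.GLM(df["disease"], X, family=sm.families.Binomial()).fit()
std_ages = pd.DataFrame({"age": np.repeat([65, 75, 85], [50, 30, 20])})  # hypothetical standard ages
print("logistic:", float(fit.predict(sm.add_constant(std_ages["age"])).mean()))
```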

    Self-testing for cancer: a community survey

    Background: Cancer-related self-tests are currently available to buy in pharmacies or over the internet, including tests for faecal occult blood, PSA and haematuria. Self-tests have potential benefits (e.g. convenience) but there are also potential harms (e.g. delays in seeking treatment). The extent of cancer-related self-test use in the UK is not known. This study aimed to determine the prevalence of cancer-related self-test use. Methods: Adults (n = 5,545) in the West Midlands were sent a questionnaire that collected socio-demographic information and data regarding previous and potential future use of 18 different self-tests. Prevalence rates were directly standardised to the England population. The postcode-based Index of Multiple Deprivation 2004 was used as a proxy measure of deprivation. Results: 2,925 (54%) usable questionnaires were returned. 1.2% (95% CI 0.83% to 1.66%) of responders reported having used a cancer-related self-test kit, and a further 36% reported that they would consider using one in the future. Logistic regression analyses suggest that increasing age, deprivation category and employment status were associated with cancer-related self-test kit use. Conclusion: We conclude that one in 100 of the adult population have used a cancer-related self-test kit and over a third would consider using one in the future. Self-test kit use could alter perceptions of risk, cause psychological morbidity and impact on the demand for healthcare.
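
    A minimal sketch of how a confidence interval of the kind quoted above can be computed for a survey proportion is given below. The counts are hypothetical (chosen only to roughly match the reported 1.2%), and no survey weighting or standardisation to the England population is applied.

```python
# Sketch: 95% confidence interval for a survey proportion (e.g. the share of
# responders who have used a cancer-related self-test kit). Hypothetical counts.
from statsmodels.stats.proportion import proportion_confint

users, responders = 35, 2925      # hypothetical count of self-test users
estimate = users / responders
low, high = proportion_confint(users, responders, alpha=0.05, method="wilson")
print(f"prevalence {estimate:.2%} (95% CI {low:.2%} to {high:.2%})")
```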

    Comparison of imputation methods for handling missing covariate data when fitting a Cox proportional hazards model: a resampling study

    Background: The appropriate handling of missing covariate data in prognostic modelling studies is yet to be conclusively determined. A resampling study was performed to investigate the effects of different missing data methods on the performance of a prognostic model. Methods: Observed data for 1,000 cases were sampled with replacement from a large complete dataset of 7,507 patients to obtain 500 replications. Five levels of missingness (ranging from 5% to 75%) were imposed on three covariates using a missing at random (MAR) mechanism. Five missing data methods were applied: a) complete case analysis (CC), b) single imputation using regression switching with predictive mean matching (SI), c) multiple imputation using regression switching imputation, d) multiple imputation using regression switching with predictive mean matching (MICE-PMM) and e) multiple imputation using flexible additive imputation models. A Cox proportional hazards model was fitted to each dataset and estimates for the regression coefficients and model performance measures were obtained. Results: CC produced biased regression coefficient estimates and inflated standard errors (SEs) with 25% or more missingness. The underestimated SE after SI resulted in poor coverage with 25% or more missingness. Of the MI approaches investigated, MI using MICE-PMM produced the least biased estimates and better model performance measures. However, this MI approach still produced biased regression coefficient estimates with 75% missingness. Conclusions: Very few differences were seen between the results from all missing data approaches with 5% missingness. However, performing MI using MICE-PMM may be the preferred missing data approach for handling between 10% and 50% MAR missingness.
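
    A minimal sketch of the resampling-with-replacement step described above, fitting a Cox proportional hazards model to each bootstrap sample, is given below. It uses the lifelines package with synthetic data and hypothetical column names, and omits the imposition and handling of missing data that the study itself investigated.

```python
# Sketch: draw bootstrap samples with replacement and fit a Cox proportional
# hazards model to each, collecting the coefficient estimates. Synthetic data.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 500
x = rng.normal(size=n)
time = rng.exponential(scale=np.exp(-0.5 * x))   # hazard increases with x
event = rng.random(n) < 0.8                      # roughly 20% censoring
full = pd.DataFrame({"x": x, "time": time, "event": event.astype(int)})

coefs = []
for _ in range(20):                              # small number of replications for illustration
    sample = full.sample(n=len(full), replace=True,
                         random_state=int(rng.integers(1_000_000)))
    cph = CoxPHFitter().fit(sample, duration_col="time", event_col="event")
    coefs.append(cph.params_["x"])
print(np.mean(coefs), np.std(coefs, ddof=1))
```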

    Identification of Rhoptry Trafficking Determinants and Evidence for a Novel Sorting Mechanism in the Malaria Parasite Plasmodium falciparum

    The rhoptry of the malaria parasite Plasmodium falciparum is an unusual secretory organelle that is thought to be related to secretory lysosomes in higher eukaryotes. Rhoptries contain an extensive collection of proteins that participate in host cell invasion and in the formation of the parasitophorous vacuole, but little is known about the sorting signals required for rhoptry protein targeting. Using green fluorescent protein chimeras and in vitro pull-down assays, we performed an analysis of the signals required for trafficking of the rhoptry protein RAP1. We provide evidence that RAP1 is escorted to the rhoptry via an interaction with the glycosylphosphatidylinositol-anchored rhoptry protein RAMA. Once within the rhoptry, RAP1 contains distinct signals for localisation within a sub-compartment of the organelle and subsequent transfer to the parasitophorous vacuole after invasion. This is the first detailed description of rhoptry trafficking signals in Plasmodium.

    Börjeson–Forssman–Lehmann syndrome: Delineating the clinical and allelic spectrum in 14 new families

    Börjeson-Forssman-Lehmann syndrome (BFLS) is an X-linked intellectual disability syndrome caused by variants in the PHF6 gene. We ascertained 19 individuals from 15 families with likely pathogenic or pathogenic PHF6 variants (11 males and 8 females). One family had previously been reported. Six variants were novel. We analysed the clinical and genetic findings in our series and compared them with reported BFLS patients. Affected males had classic features of BFLS, including intellectual disability, distinctive facies, large ears, gynaecomastia, hypogonadism and truncal obesity. Carrier female relatives of affected males were unaffected or had only mild symptoms. The phenotype of affected females with de novo variants overlapped with that of the males but included linear skin hyperpigmentation and a higher frequency of dental, retinal and cortical brain anomalies. Complications observed in our series included keloid scarring, digital fibromas, absent vaginal orifice, neuropathy, umbilical hernias, and talipes. Our analysis highlighted sex-specific differences in PHF6 variant types and locations. Affected males often have missense variants or small in-frame deletions, while affected females tend to have truncating variants or large deletions/duplications. Missense variants were found in a minority of affected females and clustered in the highly constrained PHD2 domain of PHF6. We propose recommendations for the evaluation and management of BFLS patients. These results further delineate and extend the genetic and phenotypic spectrum of BFLS.

    Attentional bias retraining in cigarette smokers attempting smoking cessation (ARTS): study protocol for a double-blind randomised controlled trial

    Smokers attend preferentially to cigarettes and other smoking-related cues in the environment, in what is known as an attentional bias. There is evidence that attentional bias may contribute to craving and failure to stop smoking. Attentional retraining procedures have been used in laboratory studies to train smokers to reduce attentional bias, although these procedures have not been applied in smoking cessation programmes. This trial will examine the efficacy of multiple sessions of attentional retraining on attentional bias, craving, and abstinence in smokers attempting cessation. This is a double-blind randomised controlled trial. Adult smokers attending a 7-session weekly stop smoking clinic will be randomised to either a modified visual probe task with attentional retraining or placebo training. Training will start 1 week prior to quit day and be given weekly for 5 sessions. Both groups will receive 21 mg transdermal nicotine patches for 8–12 weeks and withdrawal-orientated behavioural support for 7 sessions. Primary outcome measures are the change in attentional bias reaction time and urge to smoke on the Mood and Physical Symptoms Scale at 4 weeks post-quit. Secondary outcome measures include differences in withdrawal, time to first lapse and prolonged abstinence at 4 weeks post-quit, which will be biochemically validated at each clinic visit. Follow-up will take place at 8 weeks, 3 months and 6 months post-quit. This is the first randomised controlled trial of attentional retraining in smokers attempting cessation. This trial could provide proof of principle for a treatment aimed at a fundamental cause of addiction. Funding: National Institute for Health Research (NIHR) Doctoral Research Fellowship (DRF) awarded to RB (DRF-2009-02-15).
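
    A minimal sketch of how an attentional bias index is commonly derived from visual probe task reaction times (mean reaction time when the probe replaces a neutral image minus mean reaction time when it replaces a smoking-related image, so that positive values indicate bias towards smoking cues) is given below. The data are synthetic and the trial's exact outcome definitions and scoring rules may differ.

```python
# Sketch: attentional bias index from visual probe reaction times (ms).
# A positive index means faster responses to probes replacing smoking cues,
# i.e. attention biased towards smoking-related images. Synthetic data only.
import numpy as np

rng = np.random.default_rng(4)
rt_probe_on_smoking_cue = rng.normal(480, 40, size=100)  # probe replaces smoking image
rt_probe_on_neutral_cue = rng.normal(500, 40, size=100)  # probe replaces neutral image

bias = rt_probe_on_neutral_cue.mean() - rt_probe_on_smoking_cue.mean()
print(f"attentional bias index: {bias:.1f} ms")
```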